perm filename DARPA[W87,JMC] blob sn#835304 filedate 1987-02-28 generic text, type C, neo UTF8
darpa[w87,jmc]		Notes for modifying DARPA proposal

Example of non-monotonic reasoning:

An enemy weapon is presumed to have ammunition unless there is
evidence to the contrary.  Actually, the non-monotonic rule needs to
be more general: ``An enemy weapon is presumed to be operable unless
there is evidence to the contrary''.  This allows for reasons
for inoperability that have not been thought of in advance.

This example could be improved by relating it more directly to
the naval battle management program.

understanding the finances

modifying the proposal

Carolyn should go to Washington if needed to make Scherlis pay attention.
See if Scherlis is in a position to help with Shankar's support also,
i.e. to include interactive program verification.

The non-monotonic reasoning example probably should concern sequences
of actions.  Rules for a boat crossing a river.  Contexts likewise.
Relevance of proposed research

	The work we are proposing to do has a fundamental character,
and so its relation to other AI work, especially to the Defense
Department's goals of applying AI, needs to be explained.  The
relation is basically
simple.  The present expert system technology has important limitations,
so that only certain kinds of expert system projects are likely to
succeed in the near future.  Our work will help identify these limitations
and extend the technology so as to overcome them.

Extending the capability for non-monotonic reasoning

	Many inferences people and machines make are monotonic in
that when additional assumptions are made the conclusions remain
firm.  Mathematical logical languages until the late 1970s
were always monotonic.  However, human reasoning has always included
important non-monotonic components, and so have many computer programs,
although the latter have done their non-monotonic reasoning in non-systematic
ways --- for example by giving certain program variables default values that
could be superseded by giving the values explicitly.
The first formal logical non-monotonic reasoning proposals were in
(McCarthy 1977), and the first systematic non-monotonic program
system (independently motivated) was in (Doyle 1978).%[Truth Maintenance paper]
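The default-value style mentioned above might be sketched as follows (a hypothetical modern illustration; the function and variable names are invented, not taken from any program in the proposal):

```python
# A program variable has a default value that is superseded only when
# a value is supplied explicitly -- non-monotonic reasoning built into
# the program in a non-systematic way.

def route_speed(explicit_speed=None):
    # Default assumption: cruise at 30 knots unless told otherwise.
    DEFAULT_SPEED = 30
    return explicit_speed if explicit_speed is not None else DEFAULT_SPEED

assert route_speed() == 30    # the default conclusion stands
assert route_speed(12) == 12  # explicit information supersedes it
```

The default is invisible from outside the program, which is exactly the difficulty the text goes on to describe.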

	Here is an example of a non-monotonic rule.  ``An enemy weapon
is presumed to have ammunition unless there is evidence to the contrary''.
A purely monotonic reasoning system would have to list all kinds of
evidence to the contrary that might be admitted, e.g. it was observed
not to fire at excellent targets, a spy told us, we knew how many
shells it had, and it fired that many times, etc.  However, it is
clearly impractical to list all the possibilities in advance, and
there is good evidence that humans don't work that way.  Using the
non-monotonic rule gives us the conclusion unless there is evidence
against it, but is open to new forms of evidence against it that
were not previously contemplated.  A set of rules that allows only
predetermined kinds of evidence finds itself in an inconsistent
position when a new kind of evidence appears.
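Such a rule can be written declaratively in the abnormality style associated with circumscription (an illustrative formalization, not text from the proposal; the predicate names are invented):

```latex
\forall x.\; \mathit{EnemyWeapon}(x) \land \neg \mathit{Ab}(x)
  \supset \mathit{HasAmmunition}(x)
```

Circumscribing $\mathit{Ab}$ minimizes abnormality, so the conclusion holds by default; a new kind of contrary evidence is accommodated by a sentence implying $\mathit{Ab}(x)$, without revising the rule itself.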

	Such non-monotonic rules may be built into a program, may
be represented as rules of inference in a logical system or may
be represented as sentences in a logical system.  A given rule is
most easily made to run efficiently by building it into the program in
an ad hoc way, but this makes it hard to change and even harder
to learn.  It is even difficult to tell what the rules are when
they are built in.  Having non-monotonic rules of inference as
in Reiter's [1980] logic of defaults or as in
Doyle's [1978] TMS or de Kleer's 198x [ATMS]
is an intermediate position.  It isn't hard for a person to
identify them and change them, but they can't be the result
of logical inference.  The most intellectually ambitious way
of representing non-monotonic rules is as sentences of the
language itself.  That way they can be the result of inference
or of receiving a communication or of the same kinds of learning processes
that are used to learn other facts.

	The main result of successes in representing non-monotonic
rules declaratively, i.e. by logical sentences, is generality.
The same facts, learned once or entered once in a database, can
be used subsequently for many purposes.

	To summarize, the object of our work with non-monotonic
reasoning formalisms is to make it possible to include in databases
general facts about the common sense world that can then be used
by reasoning programs needing these facts.  The most important
kinds of facts are those about the effects of actions.
We have concentrated on facts about moving objects from one location
to another and facts about the effects of asking questions, looking
something up in a database, making requests, etc.  Besides such
general common sense facts, common sense reasoning can also be
done with facts about specific domains, such as the one cited
above that enemy weapons may be presumed to have ammunition.

	The practical use of formalized non-monotonic reasoning
requires advances both in the theory and in computing with non-monotonic
formalisms.  The advances in theory are required so that larger
areas of common sense and specialized knowledge can be put in
databases.  Computational advances are required so that programs
can use the knowledge.  The present proposal involves both.


Formalization of Contexts

	This proposed work is quite new; there are no published papers.
Part of the reason the area hasn't been studied before is that it depends
on non-monotonic reasoning, and this has only recently been developed.

	Consider ``The flounder crashed on the beach''.  The word
flounder usually denotes a fish, but according to certain conventions
it might also be used as a nickname for a Soviet fighter plane.
	Systems for non-monotonic reasoning permit new kinds of
representation of facts about the world.  The advantage is greater
generality and {\it elaboration tolerance}.  Here are some examples
concentrating on facts about the effects of actions.

	Suppose we want to include in a database facts about using
a boat to cross a river.  Let us suppose that whoever puts them in
wants to include only generally known facts.  Later someone may want
to add facts about using boats to cross rivers under special
circumstances --- for example, the military use of boats.  It is
important that the representation of the general facts tolerate
the addition of new facts, rather than having to be completely
rewritten.
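One way such elaboration tolerance might look in code (a hypothetical sketch; the predicates and fact strings are invented for illustration):

```python
# The general rule "a boat can be used to cross the river" is written
# once, with exceptions kept in a separate, extensible list of
# abnormality tests.  New special circumstances are added by appending
# tests, not by rewriting the rule.

abnormality_tests = [
    lambda facts: "boat leaks" in facts,
    lambda facts: "no oars" in facts,
]

def can_cross(facts):
    # General rule: crossing succeeds unless some abnormality holds.
    return not any(test(facts) for test in abnormality_tests)

assert can_cross({"boat at near bank"})
# Later elaboration for military use, added without touching can_cross:
abnormality_tests.append(lambda facts: "far bank under fire" in facts)
assert not can_cross({"boat at near bank", "far bank under fire"})
```

The original rule and its earlier exceptions survive the addition unchanged, which is the point of the boat example above.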

Often non-monotonic reasoning compounds itself.  You hear that I have
a car and reason that it is appropriate to ask me for a ride.  You
then hear that the car is being fixed and withdraw that conclusion.
The conclusion is reinstated when you hear that the car is due out
soon and withdrawn again when you hear that I have already promised
a ride to five people.  Circumscription permits expressing
general knowledge in such a way as to warrant such successive
inferences.
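The successive withdrawals and reinstatements in the car example might be sketched as follows (a hypothetical illustration, not part of the proposal; the fixed order of the tests encodes which evidence takes priority):

```python
# The conclusion "it is appropriate to ask for a ride" flips as each
# new piece of evidence arrives.

def appropriate_to_ask(evidence):
    ok = False
    if "has car" in evidence:
        ok = True                        # default conclusion
    if "car being fixed" in evidence:
        ok = False                       # conclusion withdrawn
    if "car due out soon" in evidence:
        ok = True                        # reinstated
    if "five rides promised" in evidence:
        ok = False                       # withdrawn again
    return ok

evidence = set()
evidence.add("has car")
assert appropriate_to_ask(evidence)
evidence.add("car being fixed")
assert not appropriate_to_ask(evidence)
evidence.add("car due out soon")
assert appropriate_to_ask(evidence)
evidence.add("five rides promised")
assert not appropriate_to_ask(evidence)
```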
Another text that might be worth incorporating or excerpting is in
the second section of "Applications ... " giving the various uses
of non-monotonic reasoning.
What kinds of expert systems will require the tools we are developing?

First, what kinds don't require them?  A limited set of alternatives,
heuristic classification, no prediction or planning.

Consider a system to help a decision maker or planner make sure he isn't
neglecting an important feature that would change his plans if
brought to his attention.

How might the enemy find out what we are up to?

Can we kill two birds with one stone?
It seems to me that the key thing that will help DARPA ISTO get
our proposal smoothly through the DARPA office is an identification
of what kinds of expert systems will require the advances we are
trying to make.
What kinds of expert systems will require the advances we propose to
make?

1. Expert systems that must reason about the consequences of actions
and other events.  This includes all three of the main DARPA-sponsored
projects --- the pilot's associate, the autonomous land
vehicle, and naval battle management.

2. Expert systems that must be modifiable to take into account
new facts and rules, especially when there isn't time to go
over the entire database.
The Common Communication Language

(McCarthy 1982) describes a proposed Common Business Communication
Language (CBCL) intended for communication of business messages
among computer programs belonging to different organizations and
not written as part of a single project.  CBCL includes messages
for inquiries and answers concerning availability of items,
their prices, their purchase terms, delivery conditions, etc.
The same principles govern many military messages, especially
those concerned with logistics and status of forces.  Questions
and answers about available airplanes, personnel and supplies
of all kinds require such a language.

It has turned out that the project is not just a matter of
standardizing and computerizing existing protocols.  The main
reason is that such communications among human organizations
involve many non-monotonic conventions, i.e. conventions about
what can be assumed when an item is omitted.  For example,
delivery conditions can be unstated, there can be just a date
stated, there can be a shipping method or there can be a schedule.
The delivery terms can be conditional on future events.  In order
to allow flexible program-to-program communication, the language
must provide for all levels of explicitness in the messages together
with default conventions about information left out.
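A CBCL-style default convention might be sketched as follows (a hypothetical illustration; the field names and default values are invented, not drawn from McCarthy 1982):

```python
# Fields omitted from a message are filled in from agreed defaults,
# and an explicit field supersedes its default -- the non-monotonic
# convention about omitted delivery terms described above.

DELIVERY_DEFAULTS = {
    "method": "surface freight",
    "terms": "on availability",
}

def interpret(message):
    # Omitted fields take the default convention; explicit fields win.
    filled = dict(DELIVERY_DEFAULTS)
    filled.update(message)
    return filled

order = {"item": "valve", "quantity": 100}
assert interpret(order)["method"] == "surface freight"  # default assumed
rush = {"item": "valve", "quantity": 100, "method": "air"}
assert interpret(rush)["method"] == "air"               # default superseded
```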

In connection with the common sense database we intend to experiment
with protocols for communicating with it.  These protocols will
be based on the ideas of (McCarthy 1982), which is included as
Appendix B of this proposal.